How Cognition Could Be Computing

Author

  • William J. Rapaport
Abstract

In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, the author argues that computationalism should not be the view that (human) cognition is computation, but rather the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. The author also argues that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, the author suggests that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, the author takes issue with Fetzer’s arguments to the contrary.

DOI: 10.4018/ijsss.2012010102
International Journal of Signs and Semiotic Systems, 2(1), 32-71, January-June 2012
Copyright © 2012, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

2. THE PROPER TREATMENT OF COMPUTATIONALISM

Computationalism is often characterized as the thesis that cognition is computation. Its origins can be traced back at least to Thomas Hobbes: “For REASON, in this sense [i.e., “as among the faculties of the mind”], is nothing but reckoning—that is, adding and subtracting—of the consequences of general names agreed upon for the marking and signifying of our thoughts...” (Hobbes 1651, Part I, Ch. 5, p. 46)2

It is a view whose popularity, if not its origins, has been traced back to McCulloch and Pitts (1943), Putnam (1960 or 1961), and Fodor (1975) (Horst, 2009; Piccinini, 2010). This is usually interpreted to mean that the mind, or the brain—whatever it is that exhibits cognition—computes, or is a computer. Consider these passages, more or less (but not entirely) randomly chosen:3

• A Plan is any hierarchical process in the organism that can control the order in which a sequence of operations is to be performed.
A Plan is, for an organism, essentially the same as a program for a computer (Miller et al., 1960, p. 16).4

• [H]aving a propositional attitude is being in some computational relation to an internal representation. ...Mental states are relations between organisms and internal representations, and causally interrelated mental states succeed one another according to computational principles which apply formally to the representations (Fodor, 1975, p. 198).

• [C]ognition ought to be viewed as computation. [This] rests on the fact that computation is the only worked-out view of process that is both compatible with a materialist view of how a process is realized and that attributes the behavior of the process to the operation of rules upon representations. In other words, what makes it possible to view computation and cognition as processes of fundamentally the same type is the fact that both are physically realized and both are governed by rules and representations.... (Pylyshyn, 1980, p. 111).

• [C]ognition is a type of computation (Pylyshyn, 1985, p. xiii).

• The basic idea of the computer model of the mind is that the mind is the program and the brain the hardware of a computational system (Searle, 1990, p. 21).

• Computationalism is the hypothesis that cognition is the computation of functions. ...The job for the computationalist is to determine...which specific functions explain specific cognitive phenomena (Dietrich, 1990, p. 135, emphasis added).

• [T]he Computational Theory of Mind... is...the best theory of cognition that we’ve got.... (Fodor, 2000, p. 1).

• Tokens of mental processes are ‘computations’; that is, causal chains of (typically inferential) operations on mental representations (Fodor, 2008, pp. 5-6).

• The core idea of cognitive science is that our brains are a kind of computer.... Psychologists try to find out exactly what kinds of programs our brains use, and how our brains implement those programs (Gopnik, 2009, p. 43).
• [A] particular philosophical view that holds that the mind literally is a digital computer..., and that thought literally is a kind of computation...will be called the “Computational Theory of Mind”.... (Horst, 2009).

• Computationalism...is the view that the functional organization of the brain (or any other functionally equivalent system) is computational, or that neural states are computational states (Piccinini, 2010, p. 271).

• These remarkable capacities of computers—to manipulate strings of digits and to store and execute programs—suggest a bold hypothesis. Perhaps brains are computers, and perhaps minds are nothing but the programs running on neural computers (Piccinini, 2010, pp. 277-278).

That cognition is computation is an interesting claim, one well worth exploring, and it may even be true. But it is too strong: It is not the kind of claim that is usually made when one says that a certain behavior can be understood computationally (Rapaport, 1998). There is a related claim that, because it is weaker, is more likely to be true and—more importantly—is equally relevant to computational theories of cognition, because it preserves the crucial insight that cognition is capable of being explained in terms of the mathematical theory of computation. Before stating what I think is the proper version of the thesis of computationalism, let me clarify two terms:

1. I will use ‘cognition’5 as a synonym for such terms as ‘thinking’, ‘intelligence’ (as in ‘AI’, not as in ‘IQ’), ‘mentality’, ‘understanding’, ‘intentionality’, etc.
Cognition is whatever cognitive scientists study, including (in alphabetical order) believing (and, perhaps, knowing), consciousness, emotion, language, learning, memory, (perhaps) perception, planning, problem solving, reasoning, representation (including categories, concepts, and mental imagery), sensation, thought, etc. Knowing might not be part of cognition, insofar as it depends on the way the world is (knowing is often taken to be justified true belief) and thus would be independent of what goes on in the mind or brain; perception also depends on the way the world is (see §3.1).

2. An “algorithm” for an executor E to achieve a goal G is, informally, a procedure (or “method”) for E to achieve G, where: (a) E is the agent—human or computer—that carries out the algorithm (or executes, or implements, or “follows” it); (b) the procedure is a set (usually, a sequence) of statements (or “steps”, usually “rules” or instructions); and (c) G is the solution of a (particular kind of) problem, the answer to a (particular kind of) question, or the accomplishment of some (particular kind of) task. (See the Appendix for more details.)

Various of these features can be relaxed: One can imagine a procedure that has all these features of algorithms but that has no specific goal (e.g., “Compute 2+2; then read Moby Dick.”), or one for which there is no executor, or one that yields output that is only approximately correct (sometimes called a ‘heuristic’; see §6.1), etc. For alternative informal formulations of “algorithm”, see the Appendix. Several different mathematical, hence precise, formulations of this still vague notion have been proposed, the most famous of which is Alan Turing’s (1936) notion of (what is now called in his honor) a ‘Turing machine’.
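Clauses (a)-(c) can be made concrete with a small example (mine, not the article’s): Euclid’s algorithm for greatest common divisors, with the executor, the procedure, and the goal labeled in comments.

```python
# Euclid's algorithm, annotated with the (a)-(c) structure above (my gloss).

def gcd(m: int, n: int) -> int:
    """Goal G: return the greatest common divisor of m and n."""
    # The steps below are the "procedure": a sequence of rules that the
    # executor E (here, a computer running Python) carries out.
    while n != 0:              # Step 1: repeat while the remainder is nonzero
        m, n = n, m % n        # Step 2: replace (m, n) with (n, m mod n)
    return m                   # Step 3: when n is 0, m is the answer

print(gcd(48, 18))  # → 6
```

The same procedure could equally be “followed” by a human with pencil and paper; nothing in the informal definition fixes the executor.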
Because all of these precise, mathematical formulations are logically equivalent, the claim that the informal notion of “algorithm” is captured by the notion of a Turing machine is now known as “Turing’s thesis” (or as “Church’s thesis”, or the “Church-Turing thesis”, after Alonzo Church, whose “lambda calculus” was another of the mathematical formulations). Importantly, for present purposes, when someone says that a mathematical function (see note 42) or a certain phenomenon or behavior is “computable”, they mean that there is an algorithm that outputs the values of that function when given its legal inputs6, or that produces that phenomenon or behavior—i.e., that one could write a computer program that, when executed on a suitable computer, would enable that computer to perform (i.e., to output) the appropriate behavior. Hence:

Computationalism, properly understood, should be the thesis that cognition is computable, i.e., that there is an algorithm (more likely, a family of algorithms) that computes cognitive functions.

I take the basic research question of computational cognitive science to ask, “How much of cognition is computable?” And I take the working assumption (or expectation, or hope) of computational cognitive science to be that all cognition is computable. This formulation of the basic research question allows for the possibility that the hopes will be dashed—that some aspects of cognition might not be computable. In that event, the interesting question will be: Which aspects are not computable, and why?7 Although several philosophers have offered “non-existence proofs” that cognition is not computable,8 none of these is so mathematically convincing that it has squelched all opposition.
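To make the idea of a Turing machine, and of a behavior being “computable”, concrete, here is a minimal Turing-machine simulator in Python. It is an illustrative sketch of my own, not anything from the article: the tape encoding, the state names, and the choice of the unary successor function as the goal are all assumptions of the example. To say that successor is computable is just to say that some such transition table exists.

```python
# A minimal Turing-machine simulator (illustrative sketch, not from the article).
# The "algorithm" is the transition table; the "executor" is the run() loop;
# the "goal" is the unary successor function: n strokes in, n+1 strokes out.

def run(delta, tape, state="start", blank="_", max_steps=1000):
    """Follow transition table delta until the halt state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, write, move = delta[(state, symbol)]  # look up the current rule
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table for unary successor: scan right past the strokes,
# write one more stroke at the first blank cell, then halt.
SUCCESSOR = {
    ("start", "1"): ("start", "1", "R"),   # skip existing strokes
    ("start", "_"): ("halt",  "1", "R"),   # append one stroke and halt
}

print(run(SUCCESSOR, "111"))  # three strokes in, four strokes out
```

Any of the logically equivalent formalisms (lambda calculus, register machines, a programming language) would serve equally well here; that interchangeability is what Turing’s thesis trades on.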
And, in any case, it is obvious that much of cognition is computable (for surveys, see Johnson-Laird, 1988; Edelman, 2008b; Forbus, 2010). Philip N. Johnson-Laird (1988, pp. 26-27) has expressed it well: “The goal of cognitive science is to explain how the mind works. Part of the power of the discipline resides in the theory of computability. ...Some processes in the nervous system seem to be computations.... Others...are physical processes that can be modeled in computer programs. But there may be aspects of mental life that cannot be modeled in this way.... There may even be aspects of the mind that lie outside scientific explanation.” However, I suspect that so much of cognition will eventually be shown to be computable that the residue, if any, will be negligible and ignorable.

This leads to the following “implementational implication”: If (or to the extent that) cognition is computable, then anything that implements cognitive computations would be (to that extent) cognitive. Informally, such an implementation would “really think”. As Newell, Shaw, and Simon (1958, p. 153) put it (explicating Turing’s notion of the “universal” Turing machine, or stored-program computer), “if we put any particular program in a computer, we have in fact a machine that behaves in the way prescribed by the program”. The “particular program” they were referring to was one for “human problem solving”, so a computer thus programmed would indeed solve problems, i.e., exhibit a kind of cognition.

This implication is probably a more general point, not necessarily restricted to computationalism. Suppose, as some would have it, that cognition turns out to be fully understandable only in terms of differential equations (Forbus, 2010, §1, hints at this but does not endorse it) or dynamic systems (van Gelder, 1995). Arguably, anything that implements cognitive differential equations or a cognitive dynamic system would be cognitive.
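The Newell, Shaw, and Simon point (one machine behaves in whatever way its stored program prescribes) can be sketched with a toy stored-program interpreter. This is my illustration, not theirs; the three-instruction language is an assumption made up for the example.

```python
# A toy stored-program machine (illustrative sketch, not from the article).
# The same interpreter, given different stored programs, becomes a different
# machine: the point Newell, Shaw, and Simon make about universal machines.

def execute(program, x):
    """Run program (a list of (op, arg) pairs) on input x; return the result."""
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        elif op == "neg":
            acc = -acc
        else:
            raise ValueError(f"unknown instruction: {op}")
    return acc

# "Putting a particular program in the computer" yields a particular machine:
double_plus_one = [("mul", 2), ("add", 1)]
negate = [("neg", None)]

print(execute(double_plus_one, 10))  # behaves as a doubler-plus-one
print(execute(negate, 10))           # the same hardware now behaves as a negator
```

On the implementational implication, a machine loaded with a genuinely cognitive program would, to that extent, be doing the cognitive thing the program prescribes, just as this interpreter really does double-and-add when given the first program.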
The more common view, that cognition is computation, is a “strong” view that the mind or brain is a computer: it claims that how the mind or brain does what it does is by computing. My view, that cognition is computable, is a weaker view: what the mind or brain does can be described in computational terms, but how it does it is a matter for neuroscience to determine.9

Interestingly, some of the canonical statements of “strong” computationalism are ambiguous between the two versions. Consider some of Fodor’s early statements in his Language of Thought (1975): “[H]aving a propositional attitude is being in some computational relation to an internal representation” (p. 198, original emphasis). This could be interpreted as the weaker claim that the relation is computable. The passage continues: “The intended claim is that the sequence of events that causally determines the mental state of an organism will be describable as a sequence of steps in a derivation....” (p. 198, emphasis added). The use of ‘causally’ suggests the stronger—implementational—view, but the use of ‘describable as’ suggests the weaker view. There’s more: “More exactly: Mental states are relations between organisms and internal representations, and causally interrelated mental states succeed one another according to computational principles which apply formally to the representations.” (p. 198, bold emphasis added). If ‘according to’ means merely that they behave in accordance with those computational principles, then this is consistent with my—weaker—view; but if it means that they execute those principles, then it sounds like the stronger view.
Given Fodor’s other comments and the interpretations of other scholars, and in light of later statements such as the 2008 quote above, I’m sure that Fodor always had the stronger view in mind. But the potentially ambiguous readings give a hint of the delicacy of interpretation.10

That cognition is computable is a necessary—but not sufficient—condition for it to be computation. The crucial difference between cognition as being computable rather than as being computation is that, on the weaker view, the implementational implication holds even if humans don’t implement cognition computationally. In other words, it allows for the possibility that human cognition is computable but is not computed. For instance, Gualtiero Piccinini (2005, 2007) has argued that “spike trains” (sequences of “action potentials”) in groups of neurons—which, presumably, implement human cognition—are not representable as strings of digits, hence not computational. But this does not imply that the functions11 whose outputs they produce are not computable, possibly by different mechanisms operating on different primitive elements in a different (perhaps non-biological) medium.

And Makuuchi et al. (2009, p. 8362) say: “If the processing of PSG [phrase structure grammar] is fundamental to human language, the [sic] questions about how the brain implements this faculty arise. The left pars opercularis (LPO), a posterior part of Broca’s area, was found as a neural correlate of the processing of AnBn sequences in human studies by an artificial grammar learning paradigm comprised of visually presented syllables.... These 2 studies therefore strongly suggest that LPO is a candidate brain area for the processor of PSG (i.e., hierarchical structures).” This is consistent with computability without computation. However, Makuuchi et al. (2009, p. 8365) later say: “The present study clearly demonstrates that the syntactic computations involved in the processing of syntactically complex sentences is neuroanatomically separate from the nonsyntactic VWM [verbal working memory], thus favoring the view that syntactic processes are independent of general VWM.” That is, brain locations where real computation is needed in language processing are anatomically distinct from brain locations where computation is not needed. This suggests that the brain could be computational, contra Piccinini.

Similarly, David J. Lobina (2010) (Lobina & García-Albea, 2009) has argued that, although certain cognitive capabilities are recursive (another term that is sometimes used to mean “computable”), they might not be implemented in the brain in a recursive fashion. After all, algorithms that are most efficiently expressed recursively are sometimes compiled into more efficiently executable, iterative (non-recursive) code.12

Often when we investigate some phenomenon (e.g., cognition, life, computation, flight), we begin by studying it as it occurs in nature, and then abstract away (or “ascend”) from what might be called ‘implementation details’ (Rapaport, 1999, 2005b) to arrive at a more abstract or general version of the phenomenon, from which we can “descend” to (re-)implement it in a different medium. When this occurs, the term that referred to the original (concrete) phenomenon changes its referent to the abstract phenomenon and then becomes applicable—perhaps metaphorically so—to the new (concrete) phenomenon. So, for instance, flight as it occurs in birds has been reimplemented in airplanes; ‘flying’ now refers to the more abstract concept that is multiply realized in birds and planes (cf. Ford & Hayes, 1998; Rapaport, 2000, §2.2; Forbus, 2010, p. 2). And computation as done by humans in the late 19th through early 20th centuries13 was—after Turing’s analysis—reimplemented in machines; ‘computation’ now refers to the more abstract concept. Indeed, Turing’s (1936) development of (what would now be called) a computational theory of human computation seems to me to be pretty clearly the first AI program! (See the Appendix, and cf. my comments about operating systems in §6.) The same, I suggest, may (eventually) hold true for ‘cognition’ (Rapaport, 2000). (And, perhaps, for artificial “life”.)

As Turing (1950, p. 442; boldface emphasis added) said, “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” “General educated opinion” changes when we abstract and generalize, and “the use of words” changes when we shift reference from a word’s initial application to the more abstract or general phenomenon. Similarly, Derek Jones (2010) proposed the following “metaphysical thesis”: “Underlying biological mechanisms are irrelevant to the study of behavior/systems as such. The proper object of study is the abstract system considered as a multiply realized high-level object.”

This issue is related to a dichotomy in cognitive science over its proper object of study: Do (or should) cognitive scientists study human cognition in particular, or (abstract) cognition in general? Computational psychologists lean towards the former; computational philosophers (and AI researchers) lean towards the latter (see note 9; cf. Levesque, 2012, who identifies the former with cognitive science and the latter with AI).
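Lobina’s compilation point, that a capacity specified recursively need not be implemented recursively, can be illustrated with a toy example of my own (not Lobina’s): the same computable function, defined recursively but computed by iterative code.

```python
# Recursive specification vs. iterative implementation (illustrative sketch).
# Both functions compute the same (computable) factorial function; the second
# realizes it with no recursion at all, as a compiler's transformation might.

def factorial_recursive(n: int) -> int:
    """Factorial, specified the mathematically natural (recursive) way."""
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """The same function, computed with a loop and no recursive calls."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Same input-output behavior, different internal organization:
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```

Since the two definitions are extensionally identical, nothing about the function’s computability settles which organization, recursive or iterative, an implementing mechanism (a brain, or compiled code) actually uses.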
We see this dichotomy, for example, in the shift within computational linguistics from developing algorithms for understanding natural language using “human language concepts” to developing them using statistical methods: Progress was made when it was realized that “you don’t have to do it like humans” (Lohr, 2010, quoting Alfred Spector on the research methodology of Frederick Jelinek; for further discussion of this point, see Forbus, 2010, §2).

3. SYNTACTIC SEMANTICS

Three principles underlie computationalism properly treated. I call them “internalism”, “syntacticism”, and “recursive understanding”. Together, these constitute a theory of “syntactic semantics”.14 In the present essay, because of space limitations, I will primarily summarize this theory and refer the reader to earlier publications for detailed argumentation and defense (Rapaport, 1986, 1988, 1995, 1998, 1999, 2000, 2002, 2003a, 2005b, 2006, 2011).



Publication date: 2012